Results 1 - 7 of 7
1.
PLOS global public health ; 2(7), 2022.
Article in English | EuropePMC | ID: covidwho-2248894

ABSTRACT

This study uses two existing data sources to examine how patients' symptoms can be used to differentiate COVID-19 from other respiratory diseases. One dataset consisted of 839,288 laboratory-confirmed, symptomatic, COVID-19 positive cases reported to the Centers for Disease Control and Prevention (CDC) from March 1, 2019, to September 30, 2020. The second dataset provided the controls and included 1,814 laboratory-confirmed, symptomatic influenza positive cases and 812 cases with symptomatic influenza-like illnesses. The controls were reported to the Influenza Research Database of the National Institute of Allergy and Infectious Diseases (NIAID) between January 1, 2000, and December 30, 2018. Data were analyzed using a case-control study design. The comparisons were done using 45 scenarios, with each scenario making different assumptions regarding the prevalence of COVID-19 (2%, 4%, and 6%), influenza (0.01%, 3%, 6%, 9%, and 12%), and influenza-like illnesses (1%, 3.5%, and 7%). For each scenario, a logistic regression model was used to predict COVID-19 from 2 demographic variables (age, gender) and 10 symptoms (cough, fever, chills, diarrhea, nausea and vomiting, shortness of breath, runny nose, sore throat, myalgia, and headache). The 5-fold cross-validated area under the receiver operating curve (AROC) was used to report the accuracy of these regression models. The value of various symptoms in differentiating COVID-19 from influenza depended on a variety of factors, including (1) prevalence of the pathogens that cause COVID-19, influenza, and influenza-like illness; (2) age of the patient; and (3) presence of other symptoms. The model that relied on a 5-way combination of symptoms and the demographic variables age and gender had a cross-validated AROC of 90%, suggesting that it could accurately differentiate influenza from COVID-19. This model, however, is too complex to be used in clinical practice without relying on a computer-based decision aid.
Study results encourage the development of a web-based, stand-alone artificial intelligence model that can interview patients and help clinicians make quarantine and triage decisions.
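The modeling pipeline this abstract describes (logistic regression over two demographic variables and ten symptom indicators, scored by 5-fold cross-validated AROC) can be sketched as below. This is an illustrative sketch on synthetic data, not the study's code: the coefficients, sample size, and symptom effects are made up for demonstration.

```python
# Sketch: logistic regression predicting COVID-19 status from demographics and
# symptoms, evaluated with 5-fold cross-validated area under the ROC curve.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 2000

# Synthetic stand-ins for the study's predictors: age, gender, 10 symptoms
# (cough, fever, chills, diarrhea, nausea/vomiting, dyspnea, runny nose,
# sore throat, myalgia, headache), each coded 0/1 except age.
age = rng.integers(18, 90, n)
gender = rng.integers(0, 2, n)
symptoms = rng.integers(0, 2, (n, 10))

# Outcome loosely driven by age and two symptoms plus noise (arbitrary weights).
signal = 0.02 * (age - 50) + 1.0 * symptoms[:, 1] + 0.8 * symptoms[:, 5] - 0.5
y = (signal + rng.normal(0, 1, n) > 0).astype(int)

X = np.column_stack([age, gender, symptoms])
model = LogisticRegression(max_iter=1000)
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
print(f"5-fold cross-validated AROC: {auc:.2f}")
```

With real case-control data, the same `cross_val_score` call would report the AROC figures quoted in the abstract for each prevalence scenario.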

2.
JMIR Form Res ; 7: e37550, 2023 Mar 15.
Article in English | MEDLINE | ID: covidwho-2280122

ABSTRACT

BACKGROUND: The COVID-19 pandemic has affected people's lives beyond severe and long-term physical health symptoms. Social distancing and quarantine have led to adverse mental health outcomes. COVID-19-induced economic setbacks have also likely exacerbated the psychological distress affecting broader aspects of physical and mental well-being. Remote digital health studies can provide information about the pandemic's socioeconomic, mental, and physical impact. COVIDsmart was a collaborative effort to deploy a complex digital health research study to understand the impact of the pandemic on diverse populations. We describe how digital tools were used to capture the effects of the pandemic on the overall well-being of diverse communities across large geographical areas within the state of Virginia. OBJECTIVE: The aim is to describe the digital recruitment strategies and data collection tools applied in the COVIDsmart study and share the preliminary study results. METHODS: COVIDsmart conducted digital recruitment, e-Consent, and survey collection through a Health Insurance Portability and Accountability Act-compliant digital health platform. This is an alternative to the traditional in-person recruitment and onboarding method used for studies. Participants in Virginia were actively recruited over 3 months using widespread digital marketing strategies. Six months of data were collected remotely on participant demographics, COVID-19 clinical parameters, health perceptions, mental and physical health, resilience, vaccination status, education or work functioning, social or family functioning, and economic impact. Data were collected using validated questionnaires or surveys, completed in a cyclical fashion and reviewed by an expert panel. To retain a high level of engagement throughout the study, participants were incentivized to stay enrolled and complete more surveys to further their chances of receiving a monthly gift card and one of multiple grand prizes. 
RESULTS: Virtual recruitment demonstrated relatively high rates of interest in Virginia (N=3737), with 782 (21.1%) consenting to participate in the study. The most successful recruitment technique was the effective use of newsletters or emails (n=326, 41.7%). The primary reason for participating was advancing research (n=625, 79.9%), followed by the desire to give back to the community (n=507, 64.8%). Incentives were reported as a reason by only 21% (n=164) of the consented participants. Overall, the primary motivation for participation was attributed to altruism at 88.6% (n=693). CONCLUSIONS: The COVID-19 pandemic has accelerated the need for digital transformation in research. COVIDsmart is a statewide prospective cohort study of the impact of COVID-19 on Virginians' social, physical, and mental health. The study design, project management, and collaborative efforts led to the development of effective digital recruitment, enrollment, and data collection strategies to evaluate the pandemic's effects on a large, diverse population. These findings may inform effective recruitment techniques across diverse communities and gauge participants' interest in remote digital health studies.

3.
Qual Manag Health Care ; 32(Suppl 1): S11-S20, 2023.
Article in English | MEDLINE | ID: covidwho-2238511

ABSTRACT

BACKGROUND AND OBJECTIVE: At-home rapid antigen tests provide a convenient and expedited resource to learn about severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection status. However, low sensitivity of at-home antigen tests presents a challenge. This study examines the accuracy of at-home tests, when combined with computer-facilitated symptom screening. METHODS: The study used primary data sources with data collected during 2 phases at different periods (phase 1 and phase 2): one during the period in which the alpha variant of SARS-CoV-2 was predominant in the United States and another during the surge of the delta variant. Four hundred sixty-one study participants were included in the analyses from phase 1 and 374 subjects from phase 2. Phase 1 data were used to develop a computerized symptom screening tool, using ordinary logistic regression with interaction terms, which predicted coronavirus disease-2019 (COVID-19) reverse transcription polymerase chain reaction (RT-PCR) test results. Phase 2 data were used to validate the accuracy of predicting COVID-19 diagnosis with (1) computerized symptom screening; (2) at-home rapid antigen testing; (3) the combination of both screening methods; and (4) the combination of symptom screening and vaccination status. The McFadden pseudo-R2 was used as a measure of percentage of variation in RT-PCR test results explained by the various screening methods. RESULTS: The McFadden pseudo-R2 for the first at-home test, the second at-home test, and computerized symptom screening was 0.274, 0.140, and 0.158, respectively. Scores between 0.2 and 0.4 indicated moderate levels of accuracy. The first at-home test had low sensitivity (0.587) and high specificity (0.989). Adding a second at-home test did not improve the sensitivity of the first test. Computerized symptom screening improved the accuracy of the first at-home test (added 0.131 points to sensitivity and 6.9% to pseudo-R2 of the first at-home test). 
The combination of computerized symptom screening and vaccination status was the most accurate method of screening patients in the community for COVID-19 or an active SARS-CoV-2 infection (pseudo-R2 = 0.476). CONCLUSION: Computerized symptom screening could either improve or, in some situations, replace at-home antigen tests for individuals experiencing COVID-19 symptoms.
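The McFadden pseudo-R2 used throughout this abstract compares the log-likelihood of a fitted logistic model against an intercept-only (null) model. A minimal sketch on synthetic data follows; the predictors and their effects are invented for illustration and do not reproduce the study's values.

```python
# Sketch: computing McFadden's pseudo-R^2 for a logistic screening model,
# defined as 1 - (log-likelihood of fitted model / log-likelihood of null model).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 500
# Hypothetical binary predictors: antigen result, symptom flag, vaccination status.
x = rng.integers(0, 2, (n, 3))
y = (x[:, 0] + 0.5 * x[:, 1] + rng.normal(0, 1, n) > 0.8).astype(int)

def log_likelihood(y, p):
    """Bernoulli log-likelihood of outcomes y under predicted probabilities p."""
    eps = 1e-12  # guard against log(0)
    return np.sum(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

model = LogisticRegression().fit(x, y)
ll_model = log_likelihood(y, model.predict_proba(x)[:, 1])
ll_null = log_likelihood(y, np.full(n, y.mean()))  # intercept-only baseline
pseudo_r2 = 1 - ll_model / ll_null
print(f"McFadden pseudo-R2: {pseudo_r2:.3f}")
```

Values between 0.2 and 0.4 are conventionally read as a moderately accurate fit, which is the interpretation the abstract applies to its 0.274 and 0.476 figures.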


Subject(s)
COVID-19 , Humans , COVID-19/diagnosis , COVID-19/epidemiology , SARS-CoV-2 , COVID-19 Testing , Sensitivity and Specificity
4.
Qual Manag Health Care ; 32(Suppl 1): S3-S10, 2023.
Article in English | MEDLINE | ID: covidwho-2191200

ABSTRACT

BACKGROUND AND OBJECTIVES: This article describes how multisystemic symptoms, both respiratory and nonrespiratory, can be used to differentiate coronavirus disease-2019 (COVID-19) from other diseases at the point of patient triage in the community. The article also shows how combinations of symptoms could be used to predict the probability of a patient having COVID-19. METHODS: We first used a scoping literature review to identify symptoms of COVID-19 reported during the first year of the global pandemic. We then surveyed individuals with reported symptoms and recent reverse transcription polymerase chain reaction (RT-PCR) test results to assess the accuracy of diagnosing COVID-19 from reported symptoms. The scoping literature review, which included 81 scientific articles published by February 2021, identified 7 respiratory, 9 neurological, 4 gastrointestinal, 4 inflammatory, and 5 general symptoms associated with COVID-19 diagnosis. The likelihood ratio associated with each symptom was estimated from sensitivity and specificity of symptoms reported in the literature. A total of 483 individuals were then surveyed to validate the accuracy of predicting COVID-19 diagnosis based on patient symptoms using the likelihood ratios calculated from the literature review. Survey results were weighted to reflect age, gender, and race of the US population. The accuracy of predicting COVID-19 diagnosis from patient-reported symptoms was assessed using area under the receiver operating curve (AROC). RESULTS: In the community, cough, sore throat, runny nose, dyspnea, and hypoxia, by themselves, were not good predictors of COVID-19 diagnosis. A combination of cough and fever was also a poor predictor of COVID-19 diagnosis (AROC = 0.56). 
The accuracy of diagnosing COVID-19 based on symptoms was highest when individuals presented with symptoms from different body systems (AROC of 0.74-0.81); the lowest accuracy was when individuals presented with only respiratory symptoms (AROC = 0.48). CONCLUSIONS: There are no simple rules that clinicians can use to diagnose COVID-19 in the community when diagnostic tests are unavailable or untimely. However, triage of patients to appropriate care and treatment can be improved by reviewing the combinations of certain types of symptoms across body systems.
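The likelihood-ratio arithmetic the abstract relies on works as follows: each symptom's positive likelihood ratio is sensitivity / (1 - specificity), and the ratios for independent symptoms multiply the pretest odds to give posttest odds. The sketch below uses made-up sensitivities and specificities purely to show the mechanics; they are not values from the study's literature review.

```python
# Sketch: updating the probability of COVID-19 from symptom likelihood ratios.
def positive_lr(sensitivity: float, specificity: float) -> float:
    """Positive likelihood ratio of a finding: sensitivity / (1 - specificity)."""
    return sensitivity / (1 - specificity)

def posttest_probability(pretest_prob: float, lrs: list[float]) -> float:
    """Convert pretest probability to odds, multiply by each LR, convert back."""
    odds = pretest_prob / (1 - pretest_prob)
    for lr in lrs:
        odds *= lr
    return odds / (1 + odds)

# Hypothetical symptoms from two different body systems (illustrative numbers).
lr_fever = positive_lr(0.70, 0.75)     # respiratory/general symptom, LR+ = 2.8
lr_anosmia = positive_lr(0.45, 0.95)   # neurological symptom, LR+ = 9.0

# Start from a 4% community prevalence (pretest probability).
p = posttest_probability(0.04, [lr_fever, lr_anosmia])
print(f"Posttest probability of COVID-19: {p:.2f}")  # ≈ 0.51
```

This multiplication is why combinations of symptoms spanning body systems outperform single respiratory symptoms in the abstract's results: each additional independent finding compounds the odds.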


Subject(s)
COVID-19 , Humans , Cough/diagnosis , Cough/etiology , COVID-19/diagnosis , COVID-19 Testing , SARS-CoV-2 , Triage
5.
Qual Manag Health Care ; 32(Suppl 1): S35-S44, 2023.
Article in English | MEDLINE | ID: covidwho-2191198

ABSTRACT

BACKGROUND AND OBJECTIVES: Although at-home coronavirus disease-2019 (COVID-19) testing offers several benefits in a relatively cost-effective and less risky manner, evidence suggests that at-home COVID-19 test kits have a high rate of false negatives. One way to improve the accuracy and acceptance of COVID-19 screening is to combine existing at-home physical test kits with an easily accessible, electronic, self-diagnostic tool. The objective of the current study was to test the acceptability and usability of an artificial intelligence (AI)-enabled COVID-19 testing tool that combines a web-based symptom diagnostic screening survey with a physical at-home test kit, and to test for differences across adults of varying race, age, gender, education, and income levels in the United States. METHODS: A total of 822 people from Richmond, Virginia, were included in the study. Data were collected from employees and patients of Virginia Commonwealth University Health Center as well as the surrounding community from June through October 2021. Data were weighted to reflect the demographic distribution of patients in the United States. Descriptive statistics and repeated independent t tests were run to evaluate differences in the acceptability and usability of the AI-enabled COVID-19 testing tool. RESULTS: Across all participants, there was a reasonable degree of acceptability and usability of the AI-enabled COVID-19 testing tool, which included a physical test kit and a symptom screening website. The tool demonstrated good overall acceptability and usability across race, age, gender, and educational background. Notably, participants preferred both components of the AI-enabled COVID-19 testing tool to in-clinic testing.
CONCLUSION: Overall, these findings suggest that our AI-enabled COVID-19 testing approach has great potential to improve the quality of remote COVID-19 testing at low cost and with high accessibility for diverse demographic populations in the United States.


Subject(s)
COVID-19 , Humans , Adult , Male , Female , United States , COVID-19/diagnosis , COVID-19 Testing , Artificial Intelligence , Surveys and Questionnaires
6.
J Clin Transl Sci ; 6(1): e128, 2022.
Article in English | MEDLINE | ID: covidwho-2132876

ABSTRACT

Public distrust in the US pandemic response has significantly hindered its effectiveness. In this community-based participatory research mixed-methods study, based on two datasets, we examined how distrust in COVID-19 vaccines relates to institutional distrust. We found that the Johnson & Johnson vaccine pause undermined trust in COVID-19 vaccines in general. Findings also suggest that vaccine distrust developed after participation in a study on COVID-19 testing. Increased distrust may be an unintended consequence of how healthcare and public health activities are presented and delivered, and of how research participation is structured. Both are likely to continue unless the root causes of distrust are proactively addressed.

7.
PLOS Glob Public Health ; 2(7): e0000221, 2022.
Article in English | MEDLINE | ID: covidwho-2021475

ABSTRACT

This study uses two existing data sources to examine how patients' symptoms can be used to differentiate COVID-19 from other respiratory diseases. One dataset consisted of 839,288 laboratory-confirmed, symptomatic, COVID-19 positive cases reported to the Centers for Disease Control and Prevention (CDC) from March 1, 2019, to September 30, 2020. The second dataset provided the controls and included 1,814 laboratory-confirmed, symptomatic influenza positive cases and 812 cases with symptomatic influenza-like illnesses. The controls were reported to the Influenza Research Database of the National Institute of Allergy and Infectious Diseases (NIAID) between January 1, 2000, and December 30, 2018. Data were analyzed using a case-control study design. The comparisons were done using 45 scenarios, with each scenario making different assumptions regarding the prevalence of COVID-19 (2%, 4%, and 6%), influenza (0.01%, 3%, 6%, 9%, and 12%), and influenza-like illnesses (1%, 3.5%, and 7%). For each scenario, a logistic regression model was used to predict COVID-19 from 2 demographic variables (age, gender) and 10 symptoms (cough, fever, chills, diarrhea, nausea and vomiting, shortness of breath, runny nose, sore throat, myalgia, and headache). The 5-fold cross-validated area under the receiver operating curve (AROC) was used to report the accuracy of these regression models. The value of various symptoms in differentiating COVID-19 from influenza depended on a variety of factors, including (1) prevalence of the pathogens that cause COVID-19, influenza, and influenza-like illness; (2) age of the patient; and (3) presence of other symptoms. The model that relied on a 5-way combination of symptoms and the demographic variables age and gender had a cross-validated AROC of 90%, suggesting that it could accurately differentiate influenza from COVID-19. This model, however, is too complex to be used in clinical practice without relying on a computer-based decision aid.
Study results encourage the development of a web-based, stand-alone artificial intelligence model that can interview patients and help clinicians make quarantine and triage decisions.
